87 research outputs found

    Functional consequences of inhibitory plasticity: homeostasis, the excitation-inhibition balance and beyond

    Computational neuroscience has a long-standing tradition of investigating the consequences of excitatory synaptic plasticity. In contrast, the functions of inhibitory plasticity are still largely nebulous, particularly given the bewildering diversity of interneurons in the brain. Here, we review recent computational advances that provide first suggestions for the functional roles of inhibitory plasticity, such as maintaining the excitation-inhibition balance, stabilizing recurrent network dynamics and decorrelating sensory responses. The field is still in its infancy, but given the existing body of theory for excitatory plasticity, it is likely to mature quickly and deliver important insights into the self-organization of inhibitory circuits in the brain.
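
    The flavor of such a homeostatic rule is easy to convey in code. Below is a minimal rate-based sketch in which inhibitory weights strengthen when the postsynaptic rate exceeds a target and weaken otherwise; all parameter values and variable names are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal sketch of a homeostatic inhibitory plasticity rule.
    # All parameters and names are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_exc, n_inh = 80, 20
    w_exc = rng.uniform(0.5, 1.5, n_exc)   # fixed excitatory weights
    w_inh = np.zeros(n_inh)                # plastic inhibitory weights
    eta, r_target = 1e-3, 5.0              # learning rate, target rate

    for step in range(5000):
        r_exc = rng.poisson(5.0, n_exc)    # presynaptic excitatory rates
        r_inh = rng.poisson(5.0, n_inh)    # presynaptic inhibitory rates
        r_post = max(w_exc @ r_exc - w_inh @ r_inh, 0.0)
        # Inhibition strengthens when the postsynaptic rate exceeds the
        # target and weakens otherwise, clamping the rate near r_target.
        w_inh += eta * r_inh * (r_post - r_target)
        w_inh = np.clip(w_inh, 0.0, None)

    rates = [max(w_exc @ rng.poisson(5.0, n_exc) - w_inh @ rng.poisson(5.0, n_inh), 0.0)
             for _ in range(1000)]
    print(f"mean postsynaptic rate after learning: {np.mean(rates):.1f} (target {r_target})")
    ```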

    Understanding Slow Feature Analysis: A Mathematical Framework

    Slow feature analysis is an algorithm for unsupervised learning of invariant representations from data with temporal correlations. Here, we present a mathematical analysis of slow feature analysis for the case where the input-output functions are not restricted in complexity. We show that the optimal functions obey a partial differential eigenvalue problem of a type that is common in theoretical physics. This analogy allows the transfer of mathematical techniques and intuitions from physics to concrete applications of slow feature analysis, thereby providing the means for analytical predictions and a better understanding of simulation results. We put particular emphasis on the situation where the input data are generated from a set of statistically independent sources. The dependence of the optimal functions on the sources is calculated analytically for the cases where the sources have Gaussian or uniform distributions.
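
    For intuition, the restricted (linear) case reduces to a tractable generalized eigenvalue problem between the covariance of temporal differences and the covariance of the signal; the unrestricted case analyzed in the paper is the function-space analogue of this. A minimal sketch with invented toy data:

    ```python
    # Linear SFA as a generalized eigenvalue problem (toy data, names invented).
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    t = np.linspace(0, 4 * np.pi, 2000)
    # Two latent sources with different timescales, mixed linearly.
    sources = np.stack([np.sin(t), np.sin(7 * t)], axis=1)
    x = sources @ rng.normal(size=(2, 2)) + 0.01 * rng.normal(size=(2000, 2))
    x = x - x.mean(axis=0)

    dx = np.diff(x, axis=0)
    C_dot = dx.T @ dx / len(dx)      # covariance of temporal differences
    C = x.T @ x / len(x)             # covariance of the signal
    vals, vecs = eigh(C_dot, C)      # smallest eigenvalue <-> slowest direction
    y = x @ vecs[:, 0]
    print("|corr| of slowest feature with the slow source:",
          abs(np.corrcoef(y, sources[:, 0])[0, 1]).round(3))
    ```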

    Learning place cells, grid cells and invariances with excitatory and inhibitory plasticity

    Neurons in the hippocampus and adjacent brain areas show a large diversity in their tuning to location and head direction, and the underlying circuit mechanisms are not yet resolved. In particular, it is unclear why certain cell types are selective to one spatial variable, but invariant to another. For example, place cells are typically invariant to head direction. We propose that all observed spatial tuning patterns – in both their selectivity and their invariance – arise from the same mechanism: excitatory and inhibitory synaptic plasticity driven by the spatial tuning statistics of synaptic inputs. Using simulations and a mathematical analysis, we show that combined excitatory and inhibitory plasticity can lead to localized, grid-like or invariant activity. Combinations of different input statistics along different spatial dimensions reproduce all major spatial tuning patterns observed in rodents. Our proposed model is robust to changes in parameters, develops patterns on behavioral timescales and makes distinctive experimental predictions. Funding: BMBF, 01GQ1201, Lernen und Gedächtnis in balancierten Systemen (Learning and Memory in Balanced Systems).
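
    A minimal 1D sketch of the proposed mechanism is given below. It assumes, purely for illustration, that excitatory inputs are more sharply tuned to position than inhibitory inputs; Hebbian excitatory plasticity combined with homeostatic inhibitory plasticity then tends to carve out a localized, place-field-like response. All parameters are invented.

    ```python
    # Toy 1D model: Hebbian excitation + homeostatic inhibition on spatially
    # tuned inputs. Illustrative parameters, not from the paper.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pos, n_e, n_i = 100, 50, 50
    pos = np.linspace(0, 1, n_pos)

    def tuning(n, width):
        """Gaussian tuning curves with evenly spaced centers."""
        centers = np.linspace(0, 1, n)
        return np.exp(-(pos[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

    r_e = tuning(n_e, 0.03)                 # sharply tuned excitatory inputs
    r_i = tuning(n_i, 0.12)                 # broadly tuned inhibitory inputs
    w_e = rng.uniform(0.5, 1.0, n_e)
    w_i = rng.uniform(0.0, 0.3, n_i)
    eta_e, eta_i, r0 = 5e-4, 1e-3, 1.0
    norm = np.linalg.norm(w_e)

    for step in range(20000):
        s = rng.integers(n_pos)             # random position (stand-in for a trajectory)
        out = max(r_e[s] @ w_e - r_i[s] @ w_i, 0.0)
        w_e += eta_e * out * r_e[s]         # Hebbian excitatory plasticity
        w_e *= norm / np.linalg.norm(w_e)   # normalization prevents runaway
        w_i += eta_i * r_i[s] * (out - r0)  # homeostatic inhibitory plasticity
        w_i = np.clip(w_i, 0.0, None)

    rate_map = np.clip(r_e @ w_e - r_i @ w_i, 0.0, None)
    print("peak-to-mean ratio of learned tuning:", rate_map.max() / (rate_map.mean() + 1e-9))
    ```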

    Slowness: An Objective for Spike-Timing-Dependent Plasticity?

    Slow Feature Analysis (SFA) is an efficient algorithm for learning input-output functions that extract the most slowly varying features from a quickly varying signal. It has been successfully applied to the unsupervised learning of translation-, rotation-, and other invariances in a model of the visual system, to the learning of complex cell receptive fields, and, combined with a sparseness objective, to the self-organized formation of place cells in a model of the hippocampus. In order to arrive at a biologically more plausible implementation of this learning rule, we consider analytically how SFA could be realized in simple linear continuous and spiking model neurons. It turns out that for the continuous model neuron SFA can be implemented by means of a modified version of standard Hebbian learning. In this framework we provide a connection to the trace learning rule for invariance learning. We then show that for Poisson neurons spike-timing-dependent plasticity (STDP) with a specific learning window can learn the same weight distribution as SFA. Surprisingly, we find that the appropriate learning rule reproduces the typical STDP learning window. The shape as well as the timescale are in good agreement with what has been measured experimentally. This offers a completely novel interpretation for the functional role of spike-timing-dependent plasticity in physiological neurons.
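
    For the linear continuous neuron, the modified Hebbian rule can be stated in a few lines: gradient descent on the mean squared output derivative yields an anti-Hebbian update on the temporal derivatives of input and output. A minimal sketch with invented toy signals (a unit-norm constraint stands in for SFA's unit-variance constraint, and the learned sign is arbitrary):

    ```python
    # Slowness via a modified Hebbian rule on derivatives (toy signals).
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 20 * np.pi, 5000)
    slow, fast = np.sin(0.2 * t), np.sin(5 * t)
    x = np.stack([slow + 0.3 * fast, fast + 0.3 * slow], axis=1)
    x = x - x.mean(axis=0)

    w = rng.normal(size=2)
    eta = 0.05
    for dx in np.diff(x, axis=0):
        dy = dx @ w
        w -= eta * dy * dx            # anti-Hebbian update in the derivatives
        w /= np.linalg.norm(w)        # unit norm stands in for unit variance

    # The slow source is canceled out of the fast one when w ~ (1, -0.3)
    # (up to sign and normalization).
    print("learned weight vector:", w.round(2))
    print("ideal slow direction :", (np.array([1, -0.3]) / np.linalg.norm([1, -0.3])).round(2))
    ```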

    Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells

    We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system. The system extracts a distributed grid-like representation of position and orientation, which is transcoded by sparse coding into a localized place-field, head-direction, or view representation. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
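
    The final transcoding stage is easy to illustrate in isolation. The sketch below substitutes hand-made harmonics of position for the output of the visual hierarchy, and plain FastICA for the sparse-coding step; the sparse-coded units should come out localized and place-field-like:

    ```python
    # Sparse coding of a distributed position code into localized units.
    # Hand-made inputs and FastICA are stand-ins, not the paper's pipeline.
    import numpy as np
    from sklearn.decomposition import FastICA

    pos = np.linspace(0, 1, 1000)
    # Distributed code: slowly varying harmonics of position, a stand-in
    # for the grid-like output of the top SFA layer.
    sfa_out = np.stack([np.sin((k + 1) * np.pi * pos) for k in range(8)], axis=1)

    ica = FastICA(n_components=8, random_state=0, max_iter=2000)
    units = ica.fit_transform(sfa_out)          # sparse-coded responses
    peaks = pos[np.argmax(np.abs(units), axis=0)]
    print("unit responses peak at positions:", np.sort(peaks).round(2))
    ```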

    An Extension of Slow Feature Analysis for Nonlinear Blind Source Separation

    We present and test an extension of slow feature analysis as a novel approach to nonlinear blind source separation. The algorithm relies on temporal correlations and iteratively reconstructs a set of statistically independent sources from arbitrary nonlinear instantaneous mixtures. Simulations show that it is able to invert a complicated nonlinear mixture of two audio signals with a reliability of more than 90%. The algorithm is based on a mathematical analysis of slow feature analysis for the case of input data that are generated from statistically independent sources.
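
    The core step is straightforward to sketch: expand the mixture nonlinearly and apply linear SFA, so that the slowest extracted feature approximates the slowest source; the full algorithm iterates this while removing sources that have already been recovered. A toy version with an invented two-source mixture and only a quadratic expansion (so recovery is approximate):

    ```python
    # One extraction step: quadratic expansion + linear SFA on a toy
    # nonlinear mixture. Invented sources; recovery is approximate.
    import numpy as np
    from itertools import combinations_with_replacement
    from scipy.linalg import eigh

    t = np.linspace(0, 8 * np.pi, 4000)
    s1, s2 = np.sin(t), np.sin(6 * t + 1.0)          # slow and fast sources
    # Nonlinear instantaneous mixture of the two sources.
    x = np.stack([np.tanh(s1 + 0.5 * s2), s2 + 0.3 * s1 ** 2], axis=1)

    # Quadratic expansion of the mixture, then linear SFA on the features.
    feats = [x[:, 0], x[:, 1]]
    feats += [x[:, i] * x[:, j] for i, j in combinations_with_replacement(range(2), 2)]
    z = np.stack(feats, axis=1)
    z -= z.mean(axis=0)

    dz = np.diff(z, axis=0)
    C_dot = dz.T @ dz / len(dz)
    C = z.T @ z / len(z) + 1e-9 * np.eye(z.shape[1])  # ridge for numerical safety
    vals, vecs = eigh(C_dot, C)
    y = z @ vecs[:, 0]                                # slowest extracted feature
    print("|corr| with the slow source:", abs(np.corrcoef(y, s1)[0, 1]).round(2))
    ```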

    Cortical interneurons: fit for function and fit to function? Evidence from development and evolution

    Cortical inhibitory interneurons form a broad spectrum of subtypes. This diversity suggests a division of labor, in which each cell type supports a distinct function. In the present era of optimisation-based algorithms, it is tempting to speculate that these functions were the evolutionary or developmental driving force for the spectrum of interneurons we see in the mature mammalian brain. In this study, we evaluated this hypothesis using the two most common interneuron types, parvalbumin (PV) and somatostatin (SST) expressing cells, as examples. PV and SST interneurons control the activity in the cell bodies and the apical dendrites of excitatory pyramidal cells, respectively, due to a combination of anatomical and synaptic properties. But was this compartment-specific inhibition indeed the function for which PV and SST cells originally evolved? Does the compartmental structure of pyramidal cells shape the diversification of PV and SST interneurons over development? To address these questions, we reviewed and reanalyzed publicly available data on the development and evolution of PV and SST interneurons on the one hand, and pyramidal cell morphology on the other. These data speak against the idea that the compartment structure of pyramidal cells drove the diversification into PV and SST interneurons. In particular, pyramidal cells mature late, while interneurons are likely committed to a particular fate (PV vs. SST) during early development. Moreover, comparative anatomy and single cell RNA-sequencing data indicate that PV and SST cells, but not the compartment structure of pyramidal cells, existed in the last common ancestor of mammals and reptiles. Specifically, turtle and songbird SST cells also express the Elfn1 and Cbln4 genes that are thought to play a role in compartment-specific inhibition in mammals. PV and SST cells therefore evolved and developed the properties that allow them to provide compartment-specific inhibition before there was selective pressure for this function. This suggests that interneuron diversity originally resulted from a different evolutionary driving force and was only later co-opted for the compartment-specific inhibition it seems to serve in mammals today. Future experiments could further test this idea using our computational reconstruction of ancestral Elfn1 protein sequences.

    On the Relation of Slow Feature Analysis and Laplacian Eigenmaps

    The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow feature analysis (SFA), a biologically inspired, unsupervised learning algorithm originally designed for learning invariant visual representations. We show that SFA can be interpreted as a function approximation of LEMs, where the topological neighborhoods required for LEMs are implicitly defined by the temporal structure of the data. Based on this relation, we propose a generalization of SFA to arbitrary neighborhood relations and demonstrate its applicability for spectral clustering. Finally, we review previous work with the goal of providing a unifying view on SFA and LEMs.
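
    The correspondence is most transparent for the graph induced by temporal data: when the neighborhood graph simply links consecutive time steps (a chain), the Laplacian eigenmap coordinates are the slowest unit-variance signals on that chain, which is exactly the SFA objective. A small illustrative sketch (not code from the paper):

    ```python
    # Laplacian eigenmaps of a temporal chain graph: the non-constant
    # eigenvectors are discrete cosines, i.e., the slowest signals.
    import numpy as np

    n = 200
    # Chain graph over time: W[t, t+1] = W[t+1, t] = 1.
    W = np.zeros((n, n))
    idx = np.arange(n - 1)
    W[idx, idx + 1] = W[idx + 1, idx] = 1.0
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian

    vals, vecs = np.linalg.eigh(L)
    # Skip the constant eigenvector; the next one is the first cosine
    # mode, the slowest unit-variance signal on the chain.
    slowest = vecs[:, 1]
    cosine = np.cos(np.pi * (np.arange(n) + 0.5) / n)
    cosine /= np.linalg.norm(cosine)
    print("match with the first cosine mode:", abs(slowest @ cosine).round(4))
    ```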

    Lottery Tickets in Evolutionary Optimization: On Sparse Backpropagation-Free Trainability

    Is the lottery ticket phenomenon an idiosyncrasy of gradient-based training or does it generalize to evolutionary optimization? In this paper we establish the existence of highly sparse trainable initializations for evolution strategies (ES) and characterize qualitative differences compared to gradient descent (GD)-based sparse training. We introduce a novel signal-to-noise iterative pruning procedure, which incorporates loss curvature information into the network pruning step. This can enable the discovery of even sparser trainable network initializations when using black-box evolution as compared to GD-based optimization. Furthermore, we find that these initializations encode an inductive bias, which transfers across different ES, related tasks and even to GD-based training. Finally, we compare the local optima resulting from the different optimization paradigms and sparsity levels. In contrast to GD, ES explore diverse and flat local optima and do not preserve linear mode connectivity across sparsity levels and independent runs. The results highlight qualitative differences between evolution and gradient-based learning dynamics, which can be uncovered by the study of iterative pruning procedures. Comment: 13 pages, 11 figures, International Conference on Machine Learning (ICML) 2023.
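
    The shape of such an iterative signal-to-noise pruning loop can be sketched as follows. The score below, |mean| / std of each weight across an ES-style population of perturbed candidates, is an invented stand-in for the paper's criterion, and the sketch omits the fitness-driven updates a real ES would perform between rounds:

    ```python
    # Iterative pruning with a generic signal-to-noise score over an
    # ES-style population. Illustrative only, not the paper's criterion.
    import numpy as np

    rng = np.random.default_rng(5)
    n_weights, pop_size, rounds, prune_frac = 1000, 64, 5, 0.2
    w = rng.normal(size=n_weights)
    mask = np.ones(n_weights, dtype=bool)

    for r in range(rounds):
        # ES-style population: mean weights plus Gaussian perturbations.
        # (A real ES would update w from fitness evaluations here.)
        pop = w + 0.1 * rng.normal(size=(pop_size, n_weights))
        snr = np.abs(pop.mean(axis=0)) / (pop.std(axis=0) + 1e-8)
        snr[~mask] = np.inf                  # already-pruned weights stay pruned
        k = int(prune_frac * mask.sum())
        prune = np.argsort(snr)[:k]          # lowest signal-to-noise first
        mask[prune] = False
        w *= mask
        print(f"round {r}: sparsity = {1 - mask.mean():.2f}")
    ```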